
Refactor: Added Vitest to Requests Screen #2653

Conversation

Contributor

@shivasankaran18 shivasankaran18 commented Dec 13, 2024

What kind of change does this PR introduce?

Added Vitest to Requests Screen

Issue Number: 2569

Fixes #2569

Did you add tests for your changes?
Yes

Snapshots/Videos:
Screenshot 2024-12-13 175144

If relevant, did you update the documentation?

Summary

Migrated the testing framework to Vitest.
Updated all test files and configurations to be compatible with Vitest's syntax and features.
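At its core, the Jest-to-Vitest migration swaps the global `jest.*` mocking helpers for imported `vi.*` equivalents with the same shape. As a rough illustration of what those helpers do, here is a hand-rolled call-recording mock in plain TypeScript (a sketch of the behaviour `jest.fn()` / `vi.fn()` provide, runnable without either framework; not code from this PR):

```typescript
// Hand-rolled call-recording mock, sketching the behaviour that
// jest.fn() / vi.fn() provide. Illustrative only; not code from this PR.
function makeMockFn<R>(impl?: (...args: any[]) => R) {
  const calls: any[][] = [];
  const fn = (...args: any[]): R | undefined => {
    calls.push(args); // record every invocation for later assertions
    return impl ? impl(...args) : undefined;
  };
  return Object.assign(fn, { calls });
}

// The migration replaces jest.fn(...) with vi.fn(...), which has this shape.
const getItem = makeMockFn((key: string) => (key === 'token' ? 'abc' : null));
console.log(getItem('token')); // abc
console.log(getItem.calls.length); // 1
```

In a real Vitest file the equivalent is `const getItem = vi.fn((key) => ...)`, with `getItem.mock.calls` holding the recorded arguments.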

Have you read the contributing guide?

Yes

Summary by CodeRabbit

  • Tests
    • Enhanced the testing environment for the Requests component with improved mocking of localStorage and window.location.
    • Introduced a new utility function for managing asynchronous waits during tests.
    • Maintained existing test structure while covering various user roles and functionality scenarios.
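The improved localStorage mocking mentioned above typically boils down to an in-memory stand-in for the Storage API. A minimal sketch (not the PR's actual code, which would additionally wrap each method in `vi.fn()` for call tracking; the `'role'` key and value below are illustrative):

```typescript
// Minimal in-memory localStorage stand-in. A sketch: a Vitest test would
// wrap each method in vi.fn() and install the object in place of the global.
const store = new Map<string, string>();

const localStorageMock = {
  getItem: (key: string): string | null => store.get(key) ?? null,
  setItem: (key: string, value: string): void => { store.set(key, value); },
  removeItem: (key: string): void => { store.delete(key); },
  clear: (): void => { store.clear(); },
};

localStorageMock.setItem('role', 'ADMINISTRATOR'); // key/value are illustrative
console.log(localStorageMock.getItem('role')); // ADMINISTRATOR
localStorageMock.clear();
console.log(localStorageMock.getItem('role')); // null
```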

Contributor

coderabbitai bot commented Dec 13, 2024

Walkthrough

The pull request refactors the test suite for the Requests component by migrating from Jest to Vitest. It replaces Jest-specific mocking with Vitest equivalents for localStorage and window.location. Additionally, a new utility function wait is introduced to handle asynchronous operations during tests. The structure of the test cases remains largely unchanged, focusing on various scenarios relevant to the component's functionality.
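Stripped of the React-specific wrapper, the `wait` utility described here is just a promisified `setTimeout`. A sketch of the core (the version in the diff further down additionally wraps the promise in React's `act()` so pending state updates flush before the test continues):

```typescript
// Core of the wait helper: a promisified setTimeout. Sketch only — the
// test file's version wraps this in React's act() to flush state updates.
function wait(ms = 100): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function demo(): Promise<void> {
  const start = Date.now();
  await wait(50);
  console.log(Date.now() - start >= 45); // allows a little timer slack
}
demo();
```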

Changes

File Path | Change Summary
src/screens/Requests/Requests.spec.tsx | Replaced Jest mocking with Vitest for localStorage and window.location; added wait utility function for async handling.

Assessment against linked issues

Objective Addressed Explanation
Replace Jest-specific functions and mocks with Vitest equivalents (2569)
Ensure all tests pass after migration using npm run test:vitest (2569)
Maintain 100% test coverage after migration (2569)

Possibly related issues

Possibly related PRs

Suggested labels

refactor

Suggested reviewers

  • pranshugupta54
  • varshith257
  • gautam-divyanshu

Poem

🐰 In the land of code, we hop with glee,
Refactoring tests, as happy as can be!
From Jest to Vitest, a leap we take,
With mocks and waits, for testing's sake!
So let’s celebrate, with a joyful cheer,
For cleaner tests, we hold so dear! 🎉




Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your test coverage must exceed the repository's target threshold.
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers, contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrators of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. person who verifies matching issues by others that should be closed
  4. person who gives general guidance in fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Contacting the person who assigned the reviewers is not advised unless they ask for your input. Do not @ the person who did the assignment otherwise.
  2. Read the CONTRIBUTING.md file.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
src/screens/Requests/Requests.spec.tsx (3)

23-45: LGTM! Consider adding TypeScript types for better type safety.

The Vitest setup for mocking localStorage and window.location is well-implemented and documented.

Consider adding TypeScript interfaces for better type safety:

interface MockStorage {
  getItem: ReturnType<typeof vi.fn>;
  setItem: ReturnType<typeof vi.fn>;
  clear: ReturnType<typeof vi.fn>;
  removeItem: ReturnType<typeof vi.fn>;
}

interface MockLocation {
  href: string;
  assign: ReturnType<typeof vi.fn>;
  reload: ReturnType<typeof vi.fn>;
}

57-63: Consider enhancing the wait utility function.

While the implementation is correct, consider these improvements for better reliability:

 async function wait(ms = 100): Promise<void> {
+  if (ms < 0) {
+    throw new Error('Timeout must be non-negative');
+  }
   await act(() => {
-    return new Promise((resolve) => {
+    return new Promise((resolve, reject) => {
+      if (process.env.CI) {
+        ms *= 2; // Double the timeout in CI environment
+      }
       setTimeout(resolve, ms);
     });
   });
 }

This enhancement:

  • Validates the timeout value
  • Doubles the timeout in CI environments where operations might be slower

Line range hint 1-300: Consider improving test implementation for better maintainability.

The test suite is comprehensive but could benefit from these improvements:

  1. Consider extracting magic numbers into named constants:
const TIMEOUT = {
  DEFAULT: 100,
  EXTENDED: 200,
  CI: process.env.CI ? 400 : 200
} as const;
  2. Consider grouping related tests using nested describe blocks:
describe('Requests Screen', () => {
  describe('Rendering', () => {
    test('renders properly for admin users', ...);
    test('renders properly for non-admin users', ...);
  });
  
  describe('Search Functionality', () => {
    test('handles valid search input', ...);
    test('handles empty search results', ...);
  });
});
  3. Consider adding data-testid attributes consistently:
const TEST_IDS = {
  COMPONENT: 'testComp',
  SEARCH_INPUT: 'searchByName',
  SEARCH_BUTTON: 'searchButton'
} as const;

These changes would improve maintainability and make the tests more structured.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a5c9d97 and 09b4d38.

📒 Files selected for processing (1)
  • src/screens/Requests/Requests.spec.tsx (2 hunks)
🔇 Additional comments (1)
src/screens/Requests/Requests.spec.tsx (1)

Line range hint 1-300: Verify test configuration and cleanup.

Let's ensure the test setup is complete and properly handles cleanup.

✅ Verification successful

Test configuration and cleanup are properly implemented

The test setup is well-configured with:

  • Proper Vitest configuration with coverage settings and environment setup
  • Consistent use of beforeEach/afterEach hooks across test files
  • Global cleanup through localStorage.clear() in afterEach
  • Proper test environment setup with @testing-library/jest-dom
  • Correct Vitest imports and mocking utilities

The current implementation follows testing best practices and properly handles cleanup to prevent memory leaks.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for proper Vitest configuration and potential memory leaks

# Check if vitest config exists and contains proper setup
echo "Checking Vitest configuration..."
fd -t f "vitest.config" -x cat {}

# Check for afterEach cleanup in other test files
echo "Checking cleanup patterns in other test files..."
rg "afterEach|cleanup|beforeEach" -t typescript -t javascript

# Check for proper import patterns
echo "Checking Vitest import patterns..."
rg "from ['|\"]vitest['|\"]" -t typescript -t javascript

Length of output: 1383


Script:

#!/bin/bash
# Let's try with correct file type flags for ripgrep

# Check for cleanup patterns in test files
echo "Checking cleanup patterns in test files..."
rg "afterEach|cleanup|beforeEach" --type-add 'ts:*.{ts,tsx}' --type-add 'js:*.{js,jsx}' --type ts --type js

# Check for Vitest import patterns
echo "Checking Vitest import patterns..."
rg "from ['|\"]vitest['|\"]" --type-add 'ts:*.{ts,tsx}' --type-add 'js:*.{js,jsx}' --type ts --type js

# Check for setup files
echo "Checking Vitest setup file..."
fd "vitest.setup" -x cat {}

Length of output: 9431

@palisadoes
Contributor

We have a policy of unassigning contributors who close PRs without getting validation from our reviewer team. This is because:

  1. We start looking for people to review PRs when you submit them.
  2. We often contact them and link to the PR. If the PR is closed the whole effort is wasted.
  3. The historical thread of reviewer comments is broken when the work is spread across multiple PRs. The quality of our code is affected negatively.

Please be considerate of our volunteers' limited time and our desire to improve our code base.

This policy is stated as a pinned post in all our Talawa repositories. Our YouTube videos explain why this practice is not acceptable to our Community.

Unfortunately, if this continues, we will have to close the offending PR and unassign you from the issue.

[image attachment]
